We introduce a tunable GAN, called α-GAN, parameterized by α ∈ (0, ∞], which interpolates between various f-GANs and Integral Probability Metric based GANs (under a constrained discriminator set). We construct α-GAN using a supervised loss function, namely α-loss, a tunable loss function that captures several canonical losses. We show that α-GAN is intimately related to the Arimoto divergence, which was first proposed by Österreicher (1996) and later studied by Liese and Vajda (2006). We posit that the holistic understanding that α-GAN introduces will have the practical benefit of addressing both vanishing gradients and mode collapse.
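As a rough illustration of the tunable loss mentioned above, the short Python sketch below implements one form of α-loss that appears in the related literature, evaluated on the probability a classifier assigns to the true label; the exact expression, function name, and limiting cases shown here are assumptions for illustration rather than the paper's precise statement.

import numpy as np

def alpha_loss(p_true, alpha):
    # Tunable alpha-loss on the probability p_true assigned to the correct label.
    # Assumed form (taken from the related alpha-loss literature, not this abstract):
    #   alpha -> 1    recovers the log-loss (as a limit)
    #   alpha -> inf  recovers the soft 0-1 loss, 1 - p_true
    #   alpha = 1/2   is related to the exponential loss
    p_true = np.asarray(p_true, dtype=float)
    if np.isclose(alpha, 1.0):
        return -np.log(p_true)            # log-loss limit
    if np.isinf(alpha):
        return 1.0 - p_true               # soft 0-1 loss limit
    return (alpha / (alpha - 1.0)) * (1.0 - p_true ** ((alpha - 1.0) / alpha))

# Values of alpha near 1 approach the log-loss; large alpha approaches 1 - p.
p = np.array([0.9, 0.6, 0.3])
for a in (0.5, 0.999, 1.0, 10.0, float("inf")):
    print(a, alpha_loss(p, a))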
We consider a problem of guessing, wherein an adversary is interested in knowing the value of the realization of a discrete random variable X upon observing another correlated random variable Y. The adversary can make multiple (say, k) guesses. The adversary's guessing strategy is assumed to minimize α-loss, a class of tunable loss functions parameterized by α. It has been shown before that this loss function captures well-known loss functions, including the exponential loss (α = 1/2), the log-loss (α = 1), and the 0-1 loss (α = ∞). We completely characterize the optimal adversarial strategy and the resulting expected α-loss, thereby recovering known results for α = ∞. We define an information leakage measure from the k-guess setup and derive a condition under which the leakage is unchanged from a single guess.
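To make the α = ∞ special case cited in this abstract concrete, the hedged sketch below shows the familiar 0-1-loss strategy with k guesses: guess the k most probable values of X given the observed Y. The posterior values and helper names are hypothetical, and the general finite-α characterization from the paper is not reproduced here.

import numpy as np

def best_k_guesses(posterior, k):
    # Indices of the k most likely values of X under a (hypothetical) posterior P(X | Y = y).
    posterior = np.asarray(posterior, dtype=float)
    return np.argsort(posterior)[::-1][:k]

def success_probability(posterior, k):
    # Probability that at least one of the k guesses is correct (0-1 loss, i.e. alpha = infinity).
    posterior = np.asarray(posterior, dtype=float)
    return float(posterior[best_k_guesses(posterior, k)].sum())

# Hypothetical posterior over five values of X after observing Y = y.
p_x_given_y = [0.4, 0.25, 0.2, 0.1, 0.05]
print(best_k_guesses(p_x_given_y, k=2))       # [0 1]
print(success_probability(p_x_given_y, k=2))  # 0.65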
